Ying Zhao, Central South University, Zhaoying511@gmail.com PRIMARY
Xing Liang, Central South University, csushin1004@gmail.com
Yiwen Wang, Central South University, evenwang.king@gmail.com
Mengjie Yang, Central South University, yangmengjie11@126.com
Fangfang Zhou, Central South University, zhouffang@gmail.com
Xiaoping Fan, Central South University, xpfan@csu.edu.cn
Student Team: NO
Processing
MySQL
Eclipse
May we post your submission in the
Visual Analytics Benchmark Repository after VAST Challenge 2013 is complete? YES
Video:
index.files/CSU-Zhao-MC3-Demo-Final.wmv
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Questions
MC3.1 – Provide a timeline (i.e., events organized in
chronological order) of the notable events that occur in Big Marketing’s
computer networks for the two weeks of supplied data. Use all data at your
disposal to identify up to twelve events and describe them to the extent
possible. Your answer should be no more
than 1000 words long and may contain up to twelve images.
Event 1: Disk alert began at 7:40 on April 1st and lasted for two weeks
Beginning at 7:40 on April 1st, Big Brother kept raising disk alerts for Web server 172.10.0.4 and administration server 172.10.0.40. 172.10.0.4 kept reporting the alert for the full two weeks, while 172.10.0.40 recovered in the second week. Looking into the logs, we find that both servers' disks were over 90% occupied in the first week, and that the usage of 172.10.0.40 decreased to 80% in the second week (see Figure 1).
Figure 1
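A minimal MySQL sketch of the check behind this event, assuming the Big Brother health records are loaded into a hypothetical table bigbrother(recordtime, hostip, diskusage) with diskusage in percent; table and column names are illustrative assumptions, not an exact schema:

-- Hours in which the two servers reported disk usage at or above 90%
SELECT hostip,
       DATE_FORMAT(recordtime, '%Y-%m-%d %H:00') AS hr,
       MAX(diskusage) AS max_disk_pct
FROM bigbrother
WHERE hostip IN ('172.10.0.4', '172.10.0.40')
GROUP BY hostip, hr
HAVING max_disk_pct >= 90
ORDER BY hr, hostip;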
Event 2: DDoS attack from 5:14 to 7:00 on April 2nd
Using over 60,000 source ports, ten external hosts, including 10.6.6.14, 10.6.6.6, 10.6.6.13, and 10.7.7.10, launched a DDoS attack against port 80 of internal Web server 172.30.0.4. Subsequently, every morning in the first week, various external hosts established large numbers of connections to port 80 of the internal Web servers through numerous different source ports. While connection alerts kept appearing in the Big Brother logs, the number of connection alerts grew significantly after the DDoS attack on April 2nd. The CPU alerts generated on the mornings of April 1st and April 2nd are also noteworthy (see Figure 2).
Figure 2
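As a rough sketch (not an exact query) of how this burst shows up in the NetFlow data, assume the flows are loaded into a hypothetical MySQL table netflow(parseddate, srcip, dstip, srcport, dstport, srcbytes); the column names and the year 2013 in the time literals are assumptions:

-- Attack window on the morning of April 2nd (year assumed)
SET @t_start = '2013-04-02 05:14:00';
SET @t_end   = '2013-04-02 07:00:00';

-- External sources opening an unusually large number of distinct
-- source ports toward port 80 of 172.30.0.4
SELECT srcip,
       COUNT(DISTINCT srcport) AS distinct_src_ports,
       COUNT(*)                AS flow_records
FROM netflow
WHERE dstip = '172.30.0.4'
  AND dstport = 80
  AND parseddate BETWEEN @t_start AND @t_end
GROUP BY srcip
HAVING distinct_src_ports > 1000
ORDER BY distinct_src_ports DESC;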
Event 3: Downtime of 172.30.0.4 from 16:00 April 3rd to 6:00 April 5th
Beginning at 9:00 on April 3rd, a dozen external hosts launched a DDoS attack against port 80 of Web servers 172.30.0.4 and 172.20.0.15. For instance, hosts 10.47.55.6, 10.47.56.7, 10.11.106.5, and 10.10.11.15 kept attacking for two hours. Soon afterwards, 172.30.0.4 appears to have gone down for about two days: searching the Big Brother logs for that period, we cannot find any records about the CPU, memory, or pagefile of 172.30.0.4, and its flow data stayed at zero (see Figure 3).
Figure 3
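A simple way to expose the gap, again against the hypothetical netflow table introduced under Event 2, is to count the hourly records that touch 172.30.0.4; hours missing from the result mark the suspected downtime window:

-- Hourly NetFlow record counts involving 172.30.0.4
SELECT DATE_FORMAT(parseddate, '%Y-%m-%d %H:00') AS hr,
       COUNT(*) AS flow_records
FROM netflow
WHERE srcip = '172.30.0.4' OR dstip = '172.30.0.4'
GROUP BY hr
ORDER BY hr;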
Event 4: Explosion of SMTP activities at 7:00 April 6th
In the first week, all SMTP activities took place between the internal mail servers 172.10.0.3, 172.20.0.3, and 172.30.0.3 and the external host 10.3.1.25. The three internal mail servers sent packets to port 25 of 10.3.1.25 many times in the first week, and the record count peaked at 11:00 on April 6th. Furthermore, the only SMTP alert recorded in the Big Brother logs occurred at 10:00 on April 2nd from 172.30.0.3. The number of port 25 activities of these three mail servers in the first week is about twice that of the second week, which suggests a high risk of spam dissemination (see Figure 4).
Figure 4
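A hedged sketch of how such hourly counts could be pulled from the hypothetical netflow table for the week-over-week comparison (names are illustrative):

-- Hourly SMTP (port 25) record counts from the three mail servers
SELECT DATE_FORMAT(parseddate, '%Y-%m-%d %H:00') AS hr,
       srcip,
       COUNT(*) AS smtp_records
FROM netflow
WHERE srcip IN ('172.10.0.3', '172.20.0.3', '172.30.0.3')
  AND dstport = 25
GROUP BY hr, srcip
ORDER BY hr, srcip;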
Event 5: Eruptions of FTP traffic at 10:36 April 6th and 7:00 April 7th
Hosts 10.7.5.5 and 172.10.0.40 generated bursts of network traffic over FTP ports at 10:36 on April 6th and 7:00 on April 7th. Moreover, all FTP connections related to 10.7.5.5 were denied by the IPS, and the NetFlow record count on FTP ports in the first week was far higher than that in the second week. What attracts us most is that every morning in the first week there were connections over FTP ports, mainly between hosts 172.10.0.* and 10.199.150.2, besides several connections between 172.10.0.40 and 10.199.250.2 (see Figure 5).
Figure 5
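A minimal sketch for pulling the denied FTP traffic, assuming the IPS/Firewall log sits in a hypothetical table ips(parseddate, srcip, dstip, dstport, operation), where operation marks allowed versus denied connections; all names and the 'Deny' value are assumptions:

-- Denied FTP connections (ports 20/21) that involve 10.7.5.5
SELECT parseddate, srcip, dstip, dstport
FROM ips
WHERE dstport IN (20, 21)
  AND (srcip = '10.7.5.5' OR dstip = '10.7.5.5')
  AND operation = 'Deny'
ORDER BY parseddate;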
Event 6: Destination port scan and activities on port 3389 from 11:05 April 6th to 3:20 April 7th
From 11:05 April 6th to 3:20 April 7th, external hosts 10.9.81.5 and 10.10.11.15 scanned the large majority of ports on the internal servers, especially 172.10.0.40, from a small number of source ports. These two hosts used source ports 51462, 45132, and 62625 to connect over UDP to port 137 and other high-numbered ports of the internal servers. In addition, 10.9.81.5 also scanned destination port 3389 of the internal servers at 11:00 and 12:00 on April 6th, and port 3389 behaved rather actively in the second week (see Figure 6).
Figure 6
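The scan pattern (many distinct destination ports reached from only a few source ports) can be sketched against the hypothetical netflow table as follows; column names and the 2013 time literals are assumptions:

-- Scan window from 11:05 April 6th to 3:20 April 7th (year assumed)
SET @t_start = '2013-04-06 11:05:00';
SET @t_end   = '2013-04-07 03:20:00';

-- Port fan-out per (scanner, target) pair
SELECT srcip, dstip,
       COUNT(DISTINCT dstport) AS distinct_dst_ports,
       COUNT(DISTINCT srcport) AS distinct_src_ports
FROM netflow
WHERE srcip IN ('10.9.81.5', '10.10.11.15')
  AND parseddate BETWEEN @t_start AND @t_end
GROUP BY srcip, dstip
ORDER BY distinct_dst_ports DESC;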
Event 7: The first eruption of denied connections in the IPS (Firewall) log at 11:00 April 10th
Comparing the numbers of inbound and outbound records in the IPS log, the accesses from intranet to extranet reached a small peak every morning in the second week, while the accesses from extranet to intranet appear significantly unusual at 12:00 on April 11th. The IPS denied a massive number of TCP-flag-exception packets sent from external IPs to internal mapping IPs. We also find two traffic volume explosions in NetFlow following the two maxima of denied connections at 11:00 April 11th and 12:00 April 14th, which indicates that the IPS failed to resist the attacks. The first increase of denied connections in the IPS log occurred from 11:00 to 17:00 on April 10th: 10.138.235.111, 10.13.77.49, 10.6.6.7, and 10.12.15.152 tried for a long time to connect to internal mapping IPs, especially the three DNS servers, but were denied, and the majority of their destination ports were 3389 and 80 (see Figure 7).
Figure 7
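A hedged sketch of the aggregation behind this event, reusing the hypothetical ips table from Event 5 (names and the 'Deny' value are assumptions):

-- Hourly denied-connection counts per external source toward ports 80/3389
SELECT DATE_FORMAT(parseddate, '%Y-%m-%d %H:00') AS hr,
       srcip,
       COUNT(*) AS denied_records
FROM ips
WHERE operation = 'Deny'
  AND dstport IN (80, 3389)
GROUP BY hr, srcip
HAVING denied_records > 100
ORDER BY hr, denied_records DESC;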
Event 8: DDoS attack from 11:55 April 11th to 12:55 April 12th
Beginning at 11:55 on April 11th, over twenty external IPs, such as 10.6.6.7, 10.247.58.182, 10.78.100.150, 10.170.32.181, 10.170.32.110, 10.15.7.85, and 10.156.165.120, used over 60,000 different source ports to launch a DDoS attack against ports 80 and 3389 of Web servers 172.10.0.4, 172.20.0.4, 172.30.0.4, and 172.20.0.15 (see Figure 8).
Figure 8
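To gauge the scale of this attack per target (rather than per attacker, as in Event 2), a sketch over the hypothetical netflow table could count distinct attackers and source ports for each attacked server; names and the 2013 time literals remain assumptions:

-- Attack window from 11:55 April 11th to 12:55 April 12th (year assumed)
SET @t_start = '2013-04-11 11:55:00';
SET @t_end   = '2013-04-12 12:55:00';

SELECT dstip, dstport,
       COUNT(DISTINCT srcip)   AS distinct_attackers,
       COUNT(DISTINCT srcport) AS distinct_src_ports
FROM netflow
WHERE dstip IN ('172.10.0.4', '172.20.0.4', '172.30.0.4', '172.20.0.15')
  AND dstport IN (80, 3389)
  AND parseddate BETWEEN @t_start AND @t_end
GROUP BY dstip, dstport
ORDER BY distinct_src_ports DESC;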
Event 9: Eight suspicious internal hosts and SSH protocol activity from 8:00 April 12th to 5:00 April 15th
At 8:14 on April 12th, eight suspicious internal hosts accessed external host 10.4.20.9, which appears only once in the logs. Beginning at 8:28 on April 12th, these eight internal hosts started accessing port 22 of external host 10.0.3.77 regularly, and the number of accesses to 10.0.0.4~10.0.0.14 is much larger than that to other workstations. These internal hosts also accessed 10.1.0.100 once, and server 172.20.0.3 accessed 10.0.3.77. Hence these eight internal hosts, 172.10.2.106, 172.10.2.66, 172.10.2.135, 172.20.1.81, 172.20.1.23, 172.20.1.47, 172.30.1.218, and 172.30.1.223, are noteworthy (see Figure 9).
Figure 9
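The eight hosts can be surfaced by grouping the SSH traffic toward 10.0.3.77, as in this sketch over the hypothetical netflow table (names are illustrative):

-- Internal hosts repeatedly contacting port 22 of 10.0.3.77
SELECT srcip,
       COUNT(*)        AS ssh_records,
       MIN(parseddate) AS first_seen,
       MAX(parseddate) AS last_seen
FROM netflow
WHERE dstip = '10.0.3.77'
  AND dstport = 22
  AND srcip LIKE '172.%'
GROUP BY srcip
ORDER BY ssh_records DESC;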
Event 10: Abnormal flow data and related IP activities, especially at 23:55 April 13th
Three external IPs, 10.1.0.75~10.1.0.77, appear in both weeks, and many internal workstations accessed port 80 of these three IPs a tremendous number of times. Between 10:00 on April 12th and 0:00 on April 13th, 172.30.1.* generated abnormally large traffic volumes to destination port 80. Between 23:10 on April 13th and 1:50 on April 14th, 172.10.1.100~172.10.1.199 also generated unusually large traffic volumes to destination port 80, to destination IPs 10.1.0.75~10.1.0.77, and to broadcast addresses (see Figure 10).
Figure 10
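The abnormal volumes can be exposed by summing outbound bytes per hour for each workstation subnet; this sketch assumes the hypothetical netflow table carries a byte-count column srcbytes (an assumed name):

-- Hourly outbound bytes to port 80 from the 172.30.1.* workstations
SELECT DATE_FORMAT(parseddate, '%Y-%m-%d %H:00') AS hr,
       SUM(srcbytes) AS bytes_to_port_80
FROM netflow
WHERE srcip LIKE '172.30.1.%'
  AND dstport = 80
GROUP BY hr
ORDER BY bytes_to_port_80 DESC;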
Event 11: DDoS attack from 14:14 to 15:18 on April 14th
Over twenty external host IPs made use of more than 60,000 source ports to launch a DDoS attack against ports 80 and 3389 of internal servers 172.10.0.4, 172.20.0.4, 172.30.0.4, and 172.20.0.15. Some of the external host IPs are 10.12.14.15, 10.6.6.7, 10.170.32.110, 10.0.0.42, and 10.13.77.49 (see Figure 11).
Figure 11
Event 12: Possible network maintenance from 23:49 April 14th to 1:43 April 15th
From 23:49 on April 14th to 1:43 on April 15th, there are only a small number of records in the Firewall and NetFlow logs, along with few records of protocols such as UDP and others. Hence we guess that network maintenance led to this phenomenon (see Figure 12).
Figure 12
MC3.2 – Speculate on one or more narratives that describe the
events on the network. Provide a list of analytic hypotheses and/or unanswered
questions about the notable events. In other words, if you were to hand off
your timeline to an analyst who will conduct further investigation, what
confirmations and/or answers would you like to see in their report back to you?
Your answer should be no more than 300 words long and may contain up to three
additional images.
Question 1: Puzzles about root cause and story
It is very likely that the network had already suffered intrusions early in the first week, as some external hosts tried to attack the company. By the last two days of the first week, network conditions had become severe, and they kept worsening in the second week. Even though the situations in the two weeks differ slightly, there are three commonalities across the whole network: apparently coordinated attacks, suspicious data transfers, and some highly risky ports with frequent IP activity. From the users' point of view, they may encounter blocked network access, server exceptions, and spam. Speculating further, we could attribute these phenomena to a non-IRC botnet and to the intrusion of worms and viruses. However, these speculations need to be confirmed by administrators who are more familiar with the network topology and configuration, especially regarding the correlations among the multiple abnormal events (for the timeline of events, see Figure 13).
Figure 13
Question 2: Puzzles about some highly risky UDP activities
Throughout the two weeks, there are several kinds of frequent UDP activity on ports such as port 123, which offers the NTP service, ports 137 and 138, which offer the NetBIOS service, and port 1900, which offers the UPnP service. Even though these port activities were to some extent regular, we cannot ignore some abnormal behaviors that puzzled us for a long time. For example, why were the recorded activity counts of port 1900, which offers the UPnP service, so large, and why was port 1900 so active every morning? Moreover, why was port 123 more active in the second week, particularly in the evening of April 12th? Therefore, in our opinion, these UDP ports could easily be exploited by worms and viruses, and operation managers should pay more attention to them (see Figure 14).
Figure 14
Question 3: Puzzles about suspicious internal unregistered IPs
Even though the company provides us with a list of registered internal IPs, we still find some suspicious unregistered IPs in the logs: IPs such as 172.10.0.50 and 172.10.0.6, which resemble legitimate servers; IPs ranging from 172.20.1.10 to 172.20.1.200, which resemble legitimate workstations; and other IPs such as 192.168.1.151, 192.168.3.15, 192.168.3.16, 192.168.3.17, and 192.168.3.18. It would be best for the administrators to verify why these unregistered IPs appear in the logs (see Figure 15).
Figure 15
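A sketch of how such unregistered addresses could be listed, assuming the registered IP list is loaded into a hypothetical table registered_ips(ip) alongside the hypothetical netflow table (names are illustrative):

-- Internal-looking source IPs seen in NetFlow but absent from the registered list
SELECT DISTINCT n.srcip
FROM netflow AS n
LEFT JOIN registered_ips AS r ON r.ip = n.srcip
WHERE r.ip IS NULL
  AND (n.srcip LIKE '172.%' OR n.srcip LIKE '192.168.%');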
MC3.3 – Describe the role that your visual analytics played in
enabling discovery of the notable events in MC3.1. Describe whether your visual
analytics play a role in formulating the questions in MC3.2. Your answer should
be no more than 300 words long and may contain up to three additional images.
With its support for cooperative analysis of multiple data sources in multiple views, our visualization tool has the advantage of discovering various anomalies faster and more precisely. By looking at network events from multiple perspectives and incorporating different data sources into the tool, users gain richer insight and can precisely find the various underlying anomalies more quickly. By checking the overall network situation and monitoring in real time, users can first determine what type of anomaly is occurring and when it began. Afterwards, more information about the attacked hosts and port numbers can be captured through further interaction. In the end, we can trace the origin of an anomaly from the corresponding logs. Moreover, users are able to explore the correlations among many anomalies using our visualization tool. To speculate further on the causal relationships among the various anomalies, and even to master the whole situation of the network, we, to be honest, still need a more experienced operation manager.